Research Finds: Large Language Models May Be More Likely to Lie Than Admit Ignorance
Scientists at the Polytechnic University of Valencia in Spain recently published a study showing that large language models such as GPT, LLaMA, and BLOOM tend to fabricate answers rather than admit ignorance when faced with questions they cannot answer. The research found that as these models grow larger and more complex, their accuracy on difficult questions declines, and they become more prone to producing plausible-sounding but incorrect answers. Notably, human volunteers in the study also struggled to distinguish these incorrect answers from correct ones, suggesting that AI-generated falsehoods pose a real risk of misleading people. The researchers recommend improving AI models so that they acknowledge uncertainty instead of inventing answers.